# Mixed Precision Quantization

## Qwen3 235B A22B Mixed 3-6bit

License: Apache-2.0 · Type: Large Language Model · Publisher: mlx-community

A mixed 3-6 bit quantized version converted from the Qwen/Qwen3-235B-A22B model, optimized for efficient inference on the Apple MLX framework.
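A minimal sketch of running an MLX quant like this with the mlx-lm package. The exact Hugging Face repo id below is an assumption inferred from the model name and publisher; check the mlx-community listing for the published id.

```python
# Sketch: load a mixed-precision MLX quant and generate text with mlx-lm.
# Assumption: the repo id mirrors the model name shown above; verify before use.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Qwen3-235B-A22B-mixed-3-6bit")

messages = [{"role": "user", "content": "Explain mixed-precision quantization in one sentence."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Generation runs on Apple silicon via MLX; the mixed 3-6 bit weights reduce
# memory pressure compared to a uniform higher-precision quant.
response = generate(model, tokenizer, prompt=prompt, max_tokens=128, verbose=True)
```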
## FLUX.1 Dev Q8 FP16 FP32 Mix 8-to-32 bpw GGUF

License: Other · Type: Text-to-Image · Publisher: mo137

An experimental GGUF-converted version of FLUX.1-dev, featuring various mixed-precision quantization schemes spanning 8 to 32 bits per weight.
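A minimal sketch of loading a GGUF-quantized FLUX.1-dev transformer through diffusers' GGUF support. The checkpoint filename is hypothetical; substitute one of the mixed-precision files actually published in the repo.

```python
# Sketch: run FLUX.1-dev with a GGUF mixed-precision transformer via diffusers.
# Assumption: "flux1-dev-mixed.gguf" is a placeholder for a downloaded checkpoint.
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel, GGUFQuantizationConfig

transformer = FluxTransformer2DModel.from_single_file(
    "flux1-dev-mixed.gguf",
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # offload idle components to keep VRAM in check

image = pipe("a red fox in fresh snow", num_inference_steps=28).images[0]
image.save("fox.png")
```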